Using EASI Sentinel-1 RTC Gamma0 data¶

This notebook demonstrates how to load and use Sentinel-1 Radiometric Terrain Corrected (RTC) Gamma0 data generated in EASI.

The processing uses SNAP-10 with a Graph Processing Tool (GPT) XML recipe for RTC Gamma0 and its variants.

For most uses we recommend the smoothed 20 m product (sentinel1_grd_gamma0_20m). We can process the 10 m products (sentinel1_grd_gamma0_10m, sentinel1_grd_gamma0_10m_unsmooth) on request. Please also ask if you wish to trial other combinations of the parameters.

RTC Gamma0 product variants¶

| | sentinel1_grd_gamma0_20m | sentinel1_grd_gamma0_10m | sentinel1_grd_gamma0_10m_unsmooth |
|---|---|---|---|
| **DEM** | | | |
| copernicus_dem_30 | Y | Y | Y |
| Scene to DEM extent multiplier | 3.0 | 3.0 | 3.0 |
| **SNAP operator** | | | |
| Apply-Orbit-File | Y | Y | Y |
| ThermalNoiseRemoval | Y | Y | Y |
| Remove-GRD-Border-Noise | Y | Y | Y |
| Calibration | Y | Y | Y |
| SetNoDataValue | Y | Y | Y |
| Terrain-Flattening | Y | Y | Y |
| Speckle-Filter | Y | Y | N |
| Multilook | Y | Y | N |
| Terrain-Correction | Y | Y | Y |
| **Output** | | | |
| Projection | WGS84, epsg:4326 | WGS84, epsg:4326 | WGS84, epsg:4326 |
| Pixel resolution | 20 m | 10 m | 10 m |
| Pixel alignment (PixelIsArea = top-left) | PixelIsArea | PixelIsArea | PixelIsArea |

Units and conversions¶

The sentinel1_grd_gamma0_* data are in Intensity units. Intensity can be converted to dB and amplitude, and vice-versa, with the following. Practical Xarray examples are given below.

Intensity to/from dB:

       dB = 10*log10(intensity)
intensity = 10**(dB/10)

Intensity to/from Amplitude:

intensity = amplitude * amplitude
amplitude = sqrt(intensity)

In this notebook we define two functions that work on xarray datasets/arrays, using numpy.

def intensity_to_db(x):
    return 10 * numpy.log10(x)

def db_to_intensity(db):
    return numpy.power(10, db / 10.0)

Reference: https://forum.step.esa.int/t/what-stage-of-processing-requires-the-linear-to-from-db-command
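As a quick sanity check of the formulas above, the conversions round-trip. This is an illustrative sketch only (the values are chosen so the arithmetic is easy to verify by hand), not part of the processing chain:

```python
import numpy as np

def intensity_to_db(x):
    return 10 * np.log10(x)

def db_to_intensity(db):
    return np.power(10, db / 10.0)

print(intensity_to_db(0.1))    # -10.0  (0.1 intensity = -10 dB)
print(db_to_intensity(-10.0))  # 0.1    (round-trip back to intensity)
print(np.sqrt(0.04))           # 0.2    (amplitude for intensity 0.04)
```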

Set up¶

Import required packages and functions¶

In [1]:
# Basic plots
%matplotlib inline
# import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = [12, 8]

# Common imports and settings
import os, sys, re
from pathlib import Path
from IPython.display import Markdown
import pandas as pd
pd.set_option("display.max_rows", None)
import xarray as xr
import numpy as np

# Datacube
import datacube
from datacube.utils.aws import configure_s3_access
import odc.geo.xr                                  # https://github.com/opendatacube/odc-geo
from datacube.utils import masking  # https://github.com/opendatacube/datacube-core/blob/develop/datacube/utils/masking.py
from dea_tools.plotting import display_map, rgb    # https://github.com/GeoscienceAustralia/dea-notebooks/tree/develop/Tools

# EASI tools
import git
repo = git.Repo('.', search_parent_directories=True).working_tree_dir  # This gets the current repo directory. Alternatively replace with the easi-notebooks repo path in your home directory
if repo not in sys.path: sys.path.append(repo)
from easi_tools import EasiDefaults, xarray_object_size
from easi_tools.notebook_utils import mostcommon_crs, initialize_dask, localcluster_dashboard, heading

# Holoviews
import hvplot.xarray
import cartopy.crs as ccrs
In [2]:
# EASI defaults
# These are convenience functions so that the notebooks in this repository work in all EASI deployments
# The `git.Repo()` part returns the local directory that easi-notebooks has been cloned into
# If using the `easi-tools` functions from another path, replace `repo` with your local path to `easi-notebooks` directory

import git
repo = git.Repo('.', search_parent_directories=True).working_tree_dir   # Path to this cloned local directory
if repo not in sys.path: sys.path.append(repo)                          # Add the local path to `easi-notebooks` to python
from easi_tools import EasiDefaults
from easi_tools import initialize_dask, xarray_object_size, mostcommon_crs, heading

EASI defaults¶

These default values are configured for each EASI instance. They help us to use the same training notebooks in each EASI instance. You may find some of the functions convenient for your work or you can easily override the values in your copy of this notebook.

In [3]:
easi = EasiDefaults()

family = 'sentinel-1'
product = easi.product(family)
# product = 'sentinel1_grd_gamma0_20m'
display(Markdown(f'Default {family} product for "{easi.name}": [{product}]({easi.explorer}/products/{product})'))
Successfully found configuration for deployment "asia"

Default sentinel-1 product for "asia": sentinel1_grd_gamma0_20m

Dask cluster¶

It's nearly always worth starting a Dask cluster, as it can improve data loading and processing speed.

In [4]:
# Local cluster
cluster, client = initialize_dask(workers=4)
display(client)

# Or use Dask Gateway - this may take a few minutes
# cluster, client = initialize_dask(use_gateway=True, workers=4)
# display(client)
Successfully found configuration for deployment "asia"

Client: LocalCluster with 4 workers (2 threads each), 8 threads and 24.00 GiB memory in total
Dashboard: https://hub.asia.easi-eo.solutions/user/pag064/proxy/8787/status

ODC database¶

Connect to the ODC database. The EASI Sentinel-1 data are produced and stored in a local bucket, so the configure_s3_access() function is not required for them. It is still called below because it is needed when reading from most third-party AWS S3 buckets, such as those for Landsat and Sentinel-2.

In [5]:
dc = datacube.Datacube()

# Access AWS "requester-pays" buckets
# This is necessary for reading data from most third-party AWS S3 buckets such as for Landsat and Sentinel-2
configure_s3_access(aws_unsigned=False, requester_pays=True, client=client);

Example query¶

Change any of the parameters in the query object below to adjust the location, time, projection, or spatial resolution of the returned datasets.

Use the Explorer interface to check the temporal and spatial coverage for each product.
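If you want the load to target a specific projection or pixel size, `dc.load()` also accepts `output_crs` and `resolution` parameters. A hypothetical query sketch (the EPSG code, bounds, and 20 m pixel size below are illustrative, taken from the Vietnam example in the next cell):

```python
# Hypothetical query that also sets the target projection and pixel size.
# 'output_crs' and 'resolution' are standard dc.load() parameters; the
# values here are illustrative only (Vietnam, UTM zone 48N, 20 m pixels).
query = {
    'product': 'sentinel1_grd_gamma0_20m',
    'x': (105.6, 106.4),          # "x" axis (longitude) bounds
    'y': (9.1, 9.9),              # "y" axis (latitude) bounds
    'time': ('2024-01-01', '2024-09-10'),
    'output_crs': 'epsg:32648',   # target projection
    'resolution': (-20, 20),      # (y, x) pixel size in CRS units (metres here)
}
```

When `output_crs` is projected (metres), remember that `dask_chunks` should then use `'y'`/`'x'` keys rather than `'latitude'`/`'longitude'`.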

In [6]:
# Explorer link
display(Markdown(f'See: {easi.explorer}/products/{product}'))

# EASI defaults
display(Markdown(f'#### Location: {easi.location}'))
latitude_range = easi.latitude
longitude_range = easi.longitude
time_range = easi.time

# Or set your own latitude / longitude
# Australia GWW
# latitude_range = (-33, -32.6)
# longitude_range = (120.5, 121)
# time_range = ('2020-01-01', '2020-01-31')

# Example: PNG
latitude_range = (-4.26, -3.75)
longitude_range = (144.03, 144.74)
time_range = ('2020-01-01', '2020-05-31')

# Bangladesh
# latitude_range = (21.5, 23.5)
# longitude_range = (89, 90.5)
# time_range = ('2024-05-01', '2024-06-10')

# Vietnam
# (optional target projection = epsg:32648)
# latitude_range = (9.1, 9.9)
# longitude_range = (105.6, 106.4)
# time_range = ('2024-01-01', '2024-09-10')

query = {
    'product': product,       # Product name
    'x': longitude_range,     # "x" axis bounds
    'y': latitude_range,      # "y" axis bounds
    'time': time_range,       # Any parsable date strings
}

# Convenience function to display the selected area of interest
display_map(longitude_range, latitude_range)

See: https://explorer.asia.easi-eo.solutions/products/sentinel1_grd_gamma0_20m

Location: Lake Tempe, Indonesia¶

Out[6]:
(An interactive map of the selected area of interest is displayed here.)

Load data¶

In [7]:
# Target xarray parameters
# - Select a set of measurements to load
# - output CRS and resolution
# - Usually we group input scenes on the same day to a single time layer (groupby)
# - Select a reasonable Dask chunk size (this should be adjusted depending on
#   the spatial and resolution parameters you choose)
load_params = {
    'dask_chunks': {'latitude':2048, 'longitude':2048},      # Dask chunk size
    'group_by': 'solar_day',                    # Group by day method
}

# Load data
data = dc.load(**(query | load_params))
display(xarray_object_size(data))
display(data)
'Dataset size: 5.16 GB'
<xarray.Dataset> Size: 6GB
Dimensions:      (time: 51, latitude: 2550, longitude: 3550)
Coordinates:
  * time         (time) datetime64[ns] 408B 2020-01-03T08:39:20 ... 2020-05-3...
  * latitude     (latitude) float64 20kB -3.75 -3.75 -3.751 ... -4.26 -4.26
  * longitude    (longitude) float64 28kB 144.0 144.0 144.0 ... 144.7 144.7
    spatial_ref  int32 4B 4326
Data variables:
    vh           (time, latitude, longitude) float32 2GB dask.array<chunksize=(1, 2048, 2048), meta=np.ndarray>
    vv           (time, latitude, longitude) float32 2GB dask.array<chunksize=(1, 2048, 2048), meta=np.ndarray>
    angle        (time, latitude, longitude) float32 2GB dask.array<chunksize=(1, 2048, 2048), meta=np.ndarray>
Attributes:
    crs:           EPSG:4326
    grid_mapping:  spatial_ref
In [8]:
# When happy with the shape and size of chunks, persist() the result
data = data.persist()

Conversion and helper functions¶

In [9]:
# These functions use numpy, which should be satisfactory for most notebooks.
# Calculations for larger or more complex arrays may require Xarray's "ufunc" capability.
# https://docs.xarray.dev/en/stable/examples/apply_ufunc_vectorize_1d.html
#
# Apply numpy.log10 to the DataArray
# log10_data = xr.apply_ufunc(np.log10, data)

def intensity_to_db(da: 'xr.DataArray'):
    """Return an array converted to dB values"""
    xx = da.where(da > 0, np.nan)  # Set values <= 0 to NaN
    xx = 10*np.log10(xx)
    xx.attrs.update({"units": "dB"})
    return xx

def db_to_intensity(da: 'xr.DataArray'):
    """Return an array converted to intensity values"""
    xx = np.power(10, da/10.0)
    xx.attrs.update({"units": "intensity"})
    return xx

def make_image(ds: 'xarray', frame_height=300, **kwargs):
    """Return a Holoviews object that can be displayed or combined"""
    spatial_dims = ds.odc.spatial_dims
    defaults = dict(
        cmap="Greys_r",
        y = spatial_dims[0], x = spatial_dims[1],
        rasterize = True,
        geo = True,
        frame_height = frame_height,
        clabel = ds.attrs.get('units', None),
    )
    defaults.update(**kwargs)
    return ds.hvplot.image(**defaults)

def select_valid_time_layers(ds: 'xarray', percent: float = 5):
    """Select time layers that have at least a given percentage of valid data (e.g., >=5%)

    Example usage:
      selected = select_valid_time_layers(ds, percent=5)
      filtered = ds.sel(time=selected)
    """
    spatial_dims = ds.odc.spatial_dims
    return ds.count(dim=spatial_dims).values / (ds.sizes[spatial_dims[0]]*ds.sizes[spatial_dims[1]]) >= (percent/100.0)

# Examples to check that the intensity to/from dB functions work as expected
# xx = data.vv.isel(time=0,latitude=np.arange(0, 5),longitude=np.arange(0, 5))
# xx[0] = 0
# xx[1] = -0.001
# display(xx.values)
# yy = intensity_to_db(xx)
# display(yy.values)
# zz = db_to_intensity(yy)
# display(zz.values)
In [10]:
# Optional time layer filter

selected = select_valid_time_layers(data.vv, 10)
data = data.sel(time=selected).persist()
/env/lib/python3.12/site-packages/rasterio/warp.py:387: NotGeoreferencedWarning: Dataset has no geotransform, gcps, or rpcs. The identity matrix will be returned.
  dest = _reproject(
In [11]:
# Add dB values to the dataset

data['vh_db'] = intensity_to_db(data.vh).persist()
data['vv_db'] = intensity_to_db(data.vv).persist()

Plot the data¶

Note the different data ranges for plotting (clim) between vv, vh, intensity and dB.

The two polarisations tend to discriminate different features or characteristics in the landscape, such as flooded areas or vegetation structure.

Intensity

In [12]:
# A single time layer for VV and VH, with linked axes

idx = 0
time_label = data.time.dt.strftime("%Y-%m-%d %H:%M:%S").values[idx]
vvplot = make_image(data.vv.isel(time=idx), clim=(0, 0.5), title=f'VV ({time_label})')
vhplot = make_image(data.vh.isel(time=idx), clim=(0, 0.1), title=f'VH ({time_label})')
vvplot + vhplot
Out[12]:
In [13]:
# Make a dB plot

vvplot = make_image(data.vv_db.isel(time=idx), clim=(-30, -3), title=f'VV ({time_label})')
vhplot = make_image(data.vh_db.isel(time=idx), clim=(-30, -1), title=f'VH ({time_label})')
vvplot + vhplot
Out[13]:
In [14]:
# Subplots for each time layer for VH, with linked axes

make_image(data.vh_db, clim=(-30, -3), robust=True).layout().cols(4)
Out[14]:

Plot a histogram of the dB data¶

A dB histogram can help separate water from land.
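For example, once a threshold is read off the trough between the water and land peaks of the histogram, a rough water mask is a single comparison. A minimal sketch, assuming a -20 dB threshold purely for illustration (pick the value from the histogram of your own scene):

```python
import numpy as np

def water_mask(vh_db, threshold=-20.0):
    """Boolean mask: True where VH backscatter (dB) is below the threshold (likely water).
    The elementwise comparison works on numpy arrays and xarray DataArrays alike."""
    return vh_db < threshold

# Synthetic demo values: -25 dB (water-like) and -12 dB (land-like)
demo = np.array([[-25.0, -12.0]])
print(water_mask(demo))   # [[ True False]]
```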

In [15]:
data.vh_db.plot.hist(bins=np.arange(-30, 0, 1), facecolor='red')
Out[15]:
(array([3.3782640e+06, 4.3711890e+06, 4.8626780e+06, 4.2125330e+06,
        2.6656980e+06, 1.2789150e+06, 5.1724600e+05, 2.4158200e+05,
        1.6086000e+05, 1.4170400e+05, 1.5716700e+05, 2.1627800e+05,
        3.8231100e+05, 8.8000700e+05, 2.6164480e+06, 9.9989850e+06,
        2.9853520e+07, 3.3506531e+07, 9.2999550e+06, 1.1545160e+06,
        4.2607400e+05, 1.8963100e+05, 6.1783000e+04, 1.4306000e+04,
        2.6460000e+03, 4.6000000e+02, 1.0900000e+02, 6.7000000e+01,
        5.3000000e+01]),
 array([-30., -29., -28., -27., -26., -25., -24., -23., -22., -21., -20.,
        -19., -18., -17., -16., -15., -14., -13., -12., -11., -10.,  -9.,
         -8.,  -7.,  -6.,  -5.,  -4.,  -3.,  -2.,  -1.]),
 <BarContainer object of 29 artists>)
(Histogram plot of the VH dB values is displayed here.)

Make an RGB image¶

For an RGB visualization we use the ratio between VH and VV.

In [16]:
# Add the vh/vv band to represent 'blue'
data['vh_vv'] = data.vh / data.vv

# Scale the measurements by their median so they have a similar range for visualization
spatial_dims = data.odc.spatial_dims
data['vh_scaled'] = data.vh / data.vh.median(dim=spatial_dims).persist()
data['vv_scaled'] = data.vv / data.vv.median(dim=spatial_dims).persist()
data['vh_vv_scaled'] = data.vh_vv / data.vh_vv.median(dim=spatial_dims).persist()

data['vh_vv_db'] = data.vh_db / data.vv_db
data['vh_db_scaled'] = data.vh_db / data.vh_db.median(dim=spatial_dims).persist()
data['vv_db_scaled'] = data.vv_db / data.vv_db.median(dim=spatial_dims).persist()
data['vh_vv_db_scaled'] = data.vh_vv_db / data.vh_vv_db.median(dim=spatial_dims).persist()

def rgb_image(ds):
    xx = ds.hvplot.rgb(
        bands='band',
        y = spatial_dims[0], x = spatial_dims[1],
        groupby='time', rasterize=True,
        geo=True,
        title='RGB', frame_height=500,
    )
    return xx
In [17]:
# RGB plot using a DEA Tools function
rgb(data, bands=['vh_scaled','vv_scaled','vh_vv_scaled'], col="time")
(RGB images for each time layer are displayed here.)
In [18]:
rgb_image(
    data.odc.to_rgba(bands=['vh','vv','vh_vv'], vmin=0, vmax=0.5)
)
Out[18]:
In [19]:
rgb_image(
    data.odc.to_rgba(bands=['vh_scaled','vv_scaled','vh_vv_scaled'], vmin=0, vmax=2)
)
Out[19]:
In [20]:
# rgb_image(
#     data.odc.to_rgba(bands=['vh_db','vv_db','vh_vv_db'], vmin=0, vmax=2)
# )
In [21]:
rgb_image(
    data.odc.to_rgba(bands=['vh_db_scaled','vv_db_scaled','vh_vv_db_scaled'], vmin=0, vmax=2)
)
Out[21]:
In [22]:
# Experimental - Use Holoviews

# Create an RGB array, and persist it on the dask cluster
# rgb_ds = xr.concat([med.vv, med.vh, med.vh_vv], 'channel').rename('rgb').to_dataset().persist()

# # Plot the RGB
# rgb_plot = rgb_ds.hvplot.rgb(
#     bands='channel',
#     groupby='time', rasterize=True,
#     geo=True,
#     title='RGB', frame_height=500,
# )

# rgb_plot  # + vv_plot + vh_plot

Export to Geotiffs¶

Recall that writing a dask-backed dataset to file requires the data to be computed (.compute()). This may cause a large memory increase on your JupyterLab node if the area of interest is large, which in turn may kill the kernel. If so, skip this step, choose a smaller area, or find a different way to export the data.

In [ ]:
# Make a directory to save outputs to
target = Path.home() / 'output'
if not target.exists(): target.mkdir()

from datacube.utils.cog import write_cog   # needed by write_band() below

def write_band(ds, varname):
    """Write the named variable of the xarray dataset to a GeoTIFF file for each time layer"""
    for i in range(len(ds.time)):
        date = ds[varname].isel(time=i).time.dt.strftime('%Y%m%d').data
        single = ds[varname].isel(time=i).compute()
        write_cog(geo_im=single, fname=f'{target}/example_sentinel-1_{varname}_{date}.tif', overwrite=True)
        
write_band(data, 'vv')
write_band(data, 'vh')
# write_band(rgb_da, 'rgb')
In [ ]: